Adaptive cubic overestimation methods for unconstrained optimization. Part II: worst-case iteration complexity
Authors: Coralia Cartis, Nicholas I. M. Gould and Philippe L. Toint
Abstract
An Adaptive Cubic Overestimation (ACO) framework for unconstrained optimization was proposed and analysed in Cartis, Gould & Toint (Part I, 2007). In this companion paper, we further the analysis by providing worst-case global iteration complexity bounds for ACO and a second-order variant to achieve approximate first-order, and for the latter even second-order, criticality of the iterates. In particular, the second-order ACO algorithm requires at most O(ε^{-3/2}) iterations to drive the objective's gradient below the desired accuracy ε, and O(ε^{-3}) to reach approximate nonnegative curvature in a subspace. The orders of these bounds match those proved by Nesterov & Polyak (Math. Programming 108(1), 2006, pp. 177-205) for their Algorithm 3.3, which minimizes the cubic model globally on each iteration. Our approach is more general and relevant to practical (large-scale) calculations, as ACO allows the cubic model to be solved only approximately and may employ approximate Hessians.
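To make the framework concrete, the following is a minimal Python sketch of one plausible ACO-style loop: a cubic model is (approximately) minimized at each iterate, and the regularization weight sigma is adapted like a trust-region radius. The helper names, the accept/reject thresholds (0.1, 0.9) and the use of a generic numerical solver for the subproblem are illustrative assumptions, not the paper's algorithm, which requires only an approximate model minimizer and admits approximate Hessians.

import numpy as np
from scipy.optimize import minimize

def cubic_model(s, f0, g, B, sigma):
    # m_k(s) = f(x_k) + g^T s + 0.5 s^T B s + (sigma/3) ||s||^3
    return f0 + g @ s + 0.5 * s @ (B @ s) + (sigma / 3.0) * np.linalg.norm(s) ** 3

def aco_sketch(f, grad, hess, x, eps=1e-6, sigma=1.0, max_iter=500):
    # Illustrative adaptive cubic regularization loop (hypothetical sketch,
    # not the paper's ACO algorithm).
    for k in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= eps:        # approximate first-order criticality
            return x, k
        f0, B = f(x), hess(x)               # B may be an approximate Hessian
        s = minimize(cubic_model, np.zeros_like(x), args=(f0, g, B, sigma)).x
        predicted = f0 - cubic_model(s, f0, g, B, sigma)   # model decrease
        rho = (f0 - f(x + s)) / max(predicted, 1e-16)      # actual vs. predicted
        if rho >= 0.1:                      # successful step: accept it
            x = x + s
            if rho >= 0.9:                  # very successful: ease regularization
                sigma = max(0.5 * sigma, 1e-8)
        else:                               # unsuccessful: regularize more strongly
            sigma *= 2.0
    return x, max_iter

# Usage on the 2-D Rosenbrock function:
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2*(1 - x[0]) - 400*x[0]*(x[1] - x[0]**2),
                           200*(x[1] - x[0]**2)])
hess = lambda x: np.array([[2 - 400*x[1] + 1200*x[0]**2, -400*x[0]],
                           [-400*x[0], 200.0]])
x_star, iters = aco_sketch(f, grad, hess, np.zeros(2))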
Similar papers
Adaptive cubic regularisation methods for unconstrained optimization. Part II: worst-case function- and derivative-evaluation complexity
An Adaptive Regularisation framework using Cubics (ARC) was proposed for unconstrained optimization and analysed in Cartis, Gould & Toint (Part I, 2007). In this companion paper, we further the analysis by providing worst-case global iteration complexity bounds for ARC and a second-order variant to achieve approximate first-order, and for the latter even second-order, criticality of the iterate...
Complexity bounds for second-order optimality in unconstrained optimization
This paper examines worst-case evaluation bounds for finding weak minimizers in unconstrained optimization. For the cubic regularization algorithm, Nesterov and Polyak (2006) [15] and Cartis et al. (2010) [3] show that at most O(ε^{-3}) iterations may be needed to find an iterate that is within ε of satisfying second-order optimality conditions. We first show that this bound can be...
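For reference, the approximate second-order optimality conditions in question are usually formalized as the pair below (a standard formulation; the precise tolerances used in the cited papers may differ):

\[ \|\nabla f(x_k)\| \le \epsilon \qquad \text{and} \qquad \lambda_{\min}\bigl(\nabla^2 f(x_k)\bigr) \ge -\epsilon, \]

and the O(ε^{-3}) bound counts the iterations needed to produce such an iterate x_k.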
Evaluation complexity of adaptive cubic regularization methods for convex unconstrained optimization
The adaptive cubic regularization algorithms described in Cartis, Gould & Toint (2009, 2010) for unconstrained (nonconvex) optimization are shown to have improved worst-case efficiency in terms of the function- and gradient-evaluation count when applied to convex and strongly convex objectives. In particular, our complexity upper bounds match in order (as a function of the accuracy of approximati...
LANCS Workshop on Modelling and Solving Complex Optimisation Problems
Towards optimal Newton-type methods for nonconvex smooth optimization. Coralia Cartis (Coralia.Cartis (at) ed.ac.uk), School of Mathematics, Edinburgh University. We show that the steepest-descent and Newton methods for unconstrained non-convex optimization, under standard assumptions, may both require a number of iterations and function evaluations arbitrarily close to the steepest-descent's global...
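As background for the comparison made here (the abstract is truncated at this point), the classical worst-case bound for steepest descent on functions with Lipschitz continuous gradients is

\[ \|\nabla f(x_k)\| \le \epsilon \quad \text{within at most} \quad \mathcal{O}(\epsilon^{-2}) \ \text{iterations}, \]

and the visible portion of the abstract indicates that Newton's method can come arbitrarily close to this worst case.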
Optimal Newton-type methods for nonconvex smooth optimization problems
We consider a general class of second-order iterations for unconstrained optimization that includes regularization and trust-region variants of Newton's method. For each method in this class, we exhibit a smooth, bounded-below objective function, whose gradient is globally Lipschitz continuous within an open convex set containing any iterates encountered and whose Hessian is α-Hölder continuous...
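For orientation, the worst-case order typically associated with this α-Hölder setting, and which lower-bound examples of this kind are designed to attain, is (stated as a hedged reminder of the standard rate rather than a claim taken from the truncated abstract):

\[ \mathcal{O}\bigl(\epsilon^{-(2+\alpha)/(1+\alpha)}\bigr) \ \text{iterations to ensure} \ \|\nabla f(x_k)\| \le \epsilon, \]

which recovers the O(ε^{-3/2}) cubic-regularization rate at α = 1 and the O(ε^{-2}) steepest-descent rate as α → 0.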